35 research outputs found
Cortex Inspired Learning to Recover Damaged Signal Modality with ReD-SOM Model
Recent progress in AI and the cognitive sciences opens up challenges that were previously inaccessible to study. One such task is recovering the lost data of one modality using data from another. A similar effect, the McGurk effect, has been observed in the human brain: one modality of information interferes with another, changing its perception. In this paper, we propose a way to simulate this effect and use it to reconstruct lost data modalities by combining Variational Auto-Encoders, Self-Organizing Maps, and Hebbian connections in a unified ReD-SOM (Reentering Deep Self-Organizing Map) model. We are inspired by the human brain's ability to recruit different zones for different modalities when information in one of the modalities is lacking. This new approach not only improves the analysis of ambiguous data but also restores the intended signal. The results obtained on a multimodal dataset demonstrate an increase in the quality of signal reconstruction. The effect is remarkable both visually and quantitatively, particularly in the presence of significant signal distortion.
Comment: 9 pages, 8 images, unofficial version, currently under review
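The recovery mechanism this abstract describes can be illustrated with a minimal sketch: a Hebbian outer-product association learned between the latent codes of two correlated modalities, then used to reconstruct one code from the other. The toy data, dimensions, and variable names below are illustrative assumptions; the actual model uses codes from trained Variational Auto-Encoders routed through Self-Organizing Maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent codes for two correlated modalities (stand-ins for VAE encodings
# of, e.g., the audio and visual streams of the same event).
n_pairs, dim = 500, 16
z_audio = rng.standard_normal((n_pairs, dim))
z_image = 0.8 * z_audio + 0.2 * rng.standard_normal((n_pairs, dim))

# Hebbian-like association: accumulate outer products of co-occurring codes
# ("neurons that fire together wire together").
W = sum(np.outer(a, v) for a, v in zip(z_audio, z_image)) / n_pairs

# Recover a "damaged" audio code from the intact image code.
z_hat = W @ z_image[0]

# The recovered code points in roughly the same direction as the lost one.
cos_sim = z_hat @ z_audio[0] / (np.linalg.norm(z_hat) * np.linalg.norm(z_audio[0]))
```

The association here is purely linear, which is enough to show the principle: cross-modal correlation captured at training time lets one modality stand in for the other at inference time.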
Brain-inspired self-organization with cellular neuromorphic computing for multimodal unsupervised learning
Cortical plasticity is one of the main features underlying our ability to learn and adapt to our environment. Indeed, the cerebral cortex organizes itself through structural and synaptic plasticity mechanisms that are very likely at the basis of an extremely interesting characteristic of human brain development: multimodal association. In spite of the diversity of sensory modalities, such as sight, sound, and touch, the brain arrives at the same concepts (convergence). Moreover, biological observations show that one modality can activate the internal representation of another modality when the two are correlated (divergence). In this work, we propose the Reentrant Self-Organizing Map (ReSOM), a brain-inspired neural system based on reentry theory that uses Self-Organizing Maps and Hebbian-like learning. We propose and compare different computational methods for unsupervised learning and inference, then quantify the gain of the ReSOM in a multimodal classification task. The divergence mechanism is used to label one modality based on the other, while the convergence mechanism is used to improve the overall accuracy of the system. We perform our experiments on a constructed written/spoken digits database and a DVS/EMG hand gestures database. The proposed model is implemented on a cellular neuromorphic architecture that enables distributed computing with local connectivity. We show the gain of the so-called hardware plasticity induced by the ReSOM, where the system's topology is not fixed by the user but learned over the course of the system's experience through self-organization.
Comment: Preprint
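The divergence mechanism described above can be sketched in a few lines: two toy maps are linked by Hebbian-like lateral weights learned from co-activations, so that activity in one map predicts (and thus labels) the matching unit in the other. The fixed two-unit codebooks and noise level are illustrative assumptions, not the paper's trained SOMs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed toy codebooks standing in for two trained self-organizing maps,
# one per modality; unit i of each map responds to class i.
map_a = np.array([[0.0, 0.0], [1.0, 1.0]])
map_b = np.array([[0.0, 1.0], [1.0, 0.0]])

def bmu(codebook, x):
    """Index of the best-matching unit (closest prototype) for input x."""
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

# Hebbian-like lateral weights: count co-activations over paired samples.
H = np.zeros((len(map_a), len(map_b)))
for cls in (0, 1):
    for _ in range(100):
        xa = map_a[cls] + 0.1 * rng.standard_normal(2)
        xb = map_b[cls] + 0.1 * rng.standard_normal(2)
        H[bmu(map_a, xa), bmu(map_b, xb)] += 1

# Divergence: the winning unit in map A labels the corresponding unit in map B,
# so a sample seen only by modality A can label modality B's units.
predicted_b_unit = int(np.argmax(H[bmu(map_a, [0.05, -0.02])]))
```

Because the lateral weights are simple co-activation counts with only local connectivity, the same scheme maps naturally onto the cellular neuromorphic substrate the abstract mentions.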
Self-Organizing Machine Architecture
SOMA is a France-Switzerland collaborative project that aims to develop a computing machine with self-organizing properties inspired by the functioning of the brain. The SOMA project addresses this challenge by working at the intersection of four main research fields: adaptive reconfigurable computing, cellular computing, computational neuroscience, and neuromorphic engineering. Within the framework of SOMA, we designed the SCALP platform, a 3D array of FPGAs and processors that makes it possible to prototype and evaluate self-organization mechanisms on physical cellular machines.
Visual impairment: glasses to narrate the world
Article in Nice Matin, Health section
Information coding and hardware architecture of spiking neural networks
Inspired by the brain, neuromorphic computing is a promising alternative to traditional von Neumann computing, whose growth is coming to an end as predicted by Moore's law. In this paper, we explore bio-inspired neural networks as an AI accelerator for embedded systems. To do so, we first map neural networks from the formal to the spiking domain, then choose the information coding method that yields the best performance. Afterwards, we present the design of two different hardware architectures: time-multiplexed and fully parallel. Finally, we compare their performance and their hardware cost to select the most suitable architecture, and conclude on spike-based neural networks as a potential solution for embedded artificial intelligence applications.
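One standard way to move from the formal to the spiking domain, shown here as a hedged illustration rather than the paper's chosen scheme, is rate coding: a formal activation in [0, 1] becomes the per-timestep firing probability, and decoding averages spikes over a window. The function names and window length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def rate_encode(activation: float, timesteps: int = 1000) -> np.ndarray:
    """Bernoulli spike train whose firing rate equals the activation in [0, 1]."""
    return (rng.random(timesteps) < activation).astype(np.uint8)

def rate_decode(spikes: np.ndarray) -> float:
    """Estimate the original activation as the mean firing rate."""
    return float(spikes.mean())

spikes = rate_encode(0.7)
recovered = rate_decode(spikes)
# Longer windows trade latency for precision: the standard deviation of the
# estimate shrinks as 1/sqrt(timesteps).
```

This latency/precision trade-off is exactly what makes the choice of coding method consequential for the time-multiplexed versus fully parallel comparison in the abstract.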
Toward a sparse self-organizing map for neuromorphic architectures
Neuro-biological systems have often been a source of inspiration for computational science and engineering, but in the past their impact has also been limited by our understanding of biological models. Today, new technologies lead to an equilibrium situation in which powerful and complex computers bring new biological knowledge of brain behavior. At this point, we possess sufficient understanding both to imagine new brain-inspired computing paradigms and to sustain a classical paradigm that is reaching its programming and intellectual limits.
In this context, we propose to reconsider the computation problem, first in the specific domain of mobile robotics. Our main proposal is to consider computation as part of a global adaptive system composed of sensors, actuators, a source of energy, and a controlling unit. During the adaptation process, the proposed brain-inspired computing structure not only executes the tasks of the application but also reacts to external stimulation and acts on the emergent behavior of the system. This approach is inspired by cortical plasticity in mammalian brains and suggests developing the computation architecture along with the system's experience.
This paper proposes modeling this plasticity as a problem of estimating a probability density function. This function would correspond to the nature and richness of the environment perceived through multiple modalities. We define and develop a novel neural model that solves the problem in a distributed and sparse manner, and we integrate this neural map into a bio-inspired hardware substrate that brings the plasticity property to parallel many-core architectures. This approach is then called Hardware Plasticity. The results show that the self-organization properties of our model solve the problem of clustering multimodal sensory data. The properties of the proposed model make it possible to envisage deploying this adaptation layer into hardware architectures embedded in the robot's body in order to build intelligent controllers.
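The density-estimating self-organization at the core of this proposal follows the classic Kohonen update: the best-matching unit and its grid neighbors move toward each input, so the prototypes end up distributed according to the input density. The sketch below uses illustrative hyperparameters and toy uniform data, not the paper's sparse model or its multimodal inputs.

```python
import numpy as np

rng = np.random.default_rng(3)

n_units = 8                              # units on a 1-D grid
W = rng.random((n_units, 2))             # prototype vectors
data = rng.random((2000, 2))             # toy "sensory" input, uniform on [0,1]^2

for t, x in enumerate(data):
    frac = t / len(data)
    lr = 0.5 * (1.0 - frac)                       # decaying learning rate
    sigma = max(2.0 * (1.0 - frac), 0.5)          # shrinking neighborhood width
    b = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit (BMU)
    grid_dist = np.abs(np.arange(n_units) - b)    # distance to BMU on the grid
    h = np.exp(-grid_dist**2 / (2 * sigma**2))    # Gaussian neighborhood
    W += lr * h[:, None] * (x - W)                # pull neighborhood toward x

# Quantization error: mean distance from each input to its nearest prototype.
quant_err = np.mean([np.linalg.norm(W - x, axis=1).min() for x in data])
```

Because each update touches only the BMU's neighborhood, the computation is local by construction, which is what allows the map to be distributed over the many-core hardware substrate the abstract describes.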
Architectural exploration of hardware Spiking Neural Networks integrating Non-Volatile Memories
Supervised by Benoit Miramond.
A simulator for hardware Spiking Neural Networks has been developed in order to evaluate different implementation architectures in terms of processing latency, energy consumption, and chip area. This simulator covers different types of architectures, memory-unit distributions, and memory technologies, in order to find which configuration best suits intelligent and autonomous embedded systems. Our simulator is a first step in the architectural exploration of Spiking Neural Network implementations, as it provides coarse but coherent estimates for five different possible hardware architectures. Estimates for all available architectures on an example network are given, and we show that they are mutually consistent. Lastly, some additions and modifications remain to be made in order to improve the accuracy and reliability of the latency, area, and energy consumption estimates.